118 research outputs found

    Network calculus for parallel processing

    In this note, we present preliminary results on the use of "network calculus" for parallel processing systems, specifically MapReduce.
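    The note's model is not reproduced here, but the flavor of network calculus can be illustrated with the textbook single-server case: a token-bucket arrival curve and a rate-latency service curve yield closed-form delay and backlog bounds. A minimal sketch assuming that standard setting (the parameters sigma, rho, R, and T are illustrative, not taken from the paper):

```python
# Minimal network-calculus sketch (illustrative only; not the paper's MapReduce model).
# Assumes a token-bucket arrival curve alpha(t) = sigma + rho*t and a
# rate-latency service curve beta(t) = R * max(t - T, 0), the textbook case.

def delay_bound(sigma, rho, R, T):
    """Worst-case delay: horizontal deviation between alpha and beta."""
    assert rho <= R, "unstable: arrival rate exceeds service rate"
    return T + sigma / R

def backlog_bound(sigma, rho, R, T):
    """Worst-case backlog: vertical deviation between alpha and beta."""
    assert rho <= R
    return sigma + rho * T

if __name__ == "__main__":
    # Example: 2 MB burst, 10 MB/s arrival rate, 50 MB/s service rate, 0.1 s service latency.
    print(delay_bound(sigma=2.0, rho=10.0, R=50.0, T=0.1))    # -> 0.14 s
    print(backlog_bound(sigma=2.0, rho=10.0, R=50.0, T=0.1))  # -> 3.0 MB
```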

    Using Performance Forecasting to Accelerate Elasticity

    Cloud computing facilitates dynamic resource provisioning. The automation of resource management, known as elasticity, has been subject to much research. In this context, monitoring of a running service plays a crucial role, and adjustments are made when certain thresholds are crossed. On such occasions, it is common practice to simply add or remove resources. In this paper we investigate how we can predict the performance of a service in order to adjust allocated resources dynamically based on those predictions. In other words, instead of “repairing” because a threshold has been crossed, we attempt to stay ahead and allocate an optimized amount of resources in advance. To do so, we need accurate predictive models based on workloads. We present our approach, based on the Universal Scalability Law, and discuss initial experiments.
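    The Universal Scalability Law predicts throughput at a given concurrency or node count from a single-node rate and two coefficients for contention and coherency. A minimal sketch of how such a model could drive proactive allocation, assuming hypothetical fitted parameters (lam, sigma, kappa) and a simple target-throughput search; none of this is taken from the paper's implementation:

```python
# Illustrative capacity-planning sketch with the Universal Scalability Law (USL).

def usl_throughput(n, lam, sigma, kappa):
    """Predicted throughput at concurrency/node count n.

    lam:   throughput of a single node (requests/s)
    sigma: contention (serialization) coefficient
    kappa: coherency (crosstalk) coefficient
    """
    return lam * n / (1 + sigma * (n - 1) + kappa * n * (n - 1))

def nodes_for_target(target, lam, sigma, kappa, max_nodes=128):
    """Smallest node count whose predicted throughput meets the target, if any."""
    for n in range(1, max_nodes + 1):
        if usl_throughput(n, lam, sigma, kappa) >= target:
            return n
    return None  # unreachable: USL throughput eventually peaks and declines

if __name__ == "__main__":
    # Hypothetical fitted parameters for a service.
    lam, sigma, kappa = 1000.0, 0.02, 0.0005
    print(nodes_for_target(target=8000.0, lam=lam, sigma=sigma, kappa=kappa))
```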

    PULP: an Adaptive Gossip-Based Dissemination Protocol for Multi-Source Message Streams

    Gossip-based protocols provide a simple, scalable, and robust way to disseminate messages in large-scale systems. In such protocols, messages are spread in an epidemic manner. Gossiping may take place between nodes using push, pull, or a combination. Push-based systems achieve reasonable latency and high resilience to failures but may impose an unnecessarily large redundancy and overhead on the system. At the other extreme, pull-based protocols impose a lower overhead on the network at the price of increased latencies. A few hybrid approaches have been proposed, typically pushing control messages and pulling data, to avoid the redundancy of high-volume content and single-source streams. Yet, to the best of our knowledge, no other system intermingles push and pull in a multiple-sender scenario in such a way that the data messages of one dissemination help carry control messages for the others and adaptively adjust the rate of operation, further reducing overall cost and improving both delays and robustness. In this paper, we propose an efficient generic push-pull dissemination protocol, Pulp, which combines the best of both worlds. Pulp exploits the efficiency of push approaches while limiting redundant messages and therefore imposing a low overhead, as pull protocols do. Pulp leverages the dissemination of multiple messages from diverse sources: by exploiting the push phase of messages to transmit information about other disseminations, Pulp enables an efficient pulling of other messages, which in turn help with the dissemination of pending messages. We deployed Pulp on a cluster and on PlanetLab. Our results demonstrate that Pulp achieves an appealing trade-off between coverage, message redundancy, and propagation delay. © 2011 Springer Science+Business Media, LLC
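    A minimal sketch of the hybrid push/pull idea under a simplified model: pushes carry a piggybacked list of recently seen message IDs, and periodic pulls fetch whatever a node has learned about but not yet received. The fanout, TTL, pull scheduling, and data structures here are illustrative choices, not the published protocol:

```python
# Simplified push/pull gossip sketch in the spirit of the approach described above.
import random

FANOUT = 3    # neighbors contacted per push
PUSH_TTL = 2  # how many hops a message is actively pushed
HISTORY = 20  # how many recently seen IDs are piggybacked on each push

class Node:
    def __init__(self, node_id):
        self.id = node_id
        self.store = {}      # msg_id -> payload actually received
        self.known_ids = []  # IDs learned via piggybacking but not yet received

    def receive_push(self, msg_id, payload, ttl, piggyback, network):
        # Piggybacked IDs advertise other disseminations, enabling targeted pulls later.
        for mid in piggyback:
            if mid not in self.store and mid not in self.known_ids:
                self.known_ids.append(mid)
        if msg_id in self.store:
            return  # duplicate push: the redundancy the pull phase helps limit
        self.store[msg_id] = payload
        if ttl > 0:
            network.push(self, msg_id, payload, ttl - 1)

    def pull_round(self, network):
        # Contact a random peer, learn what it has seen, and fetch anything missing.
        peer = network.random_peer(exclude=self)
        for mid in list(peer.store) + peer.known_ids:
            if mid not in self.store and mid not in self.known_ids:
                self.known_ids.append(mid)
        for mid in [m for m in self.known_ids if m not in self.store]:
            if mid in peer.store:
                self.store[mid] = peer.store[mid]

class Network:
    def __init__(self, size):
        self.nodes = [Node(i) for i in range(size)]

    def random_peer(self, exclude):
        peer = random.choice(self.nodes)
        while peer is exclude:
            peer = random.choice(self.nodes)
        return peer

    def push(self, sender, msg_id, payload, ttl):
        piggyback = list(sender.store)[-HISTORY:]
        for peer in random.sample(self.nodes, FANOUT):
            if peer is not sender:
                peer.receive_push(msg_id, payload, ttl, piggyback, self)

if __name__ == "__main__":
    net = Network(size=50)
    # Two sources publish; pushes spread the data a few hops and advertise IDs,
    # then a few pull rounds let the remaining nodes fetch what they are missing.
    net.nodes[0].store["m1"] = "payload from source 0"
    net.push(net.nodes[0], "m1", "payload from source 0", PUSH_TTL)
    net.nodes[1].store["m2"] = "payload from source 1"
    net.push(net.nodes[1], "m2", "payload from source 1", PUSH_TTL)
    for _ in range(8):
        for node in net.nodes:
            node.pull_round(net)
    both = sum(1 for n in net.nodes if "m1" in n.store and "m2" in n.store)
    print(f"{both} of {len(net.nodes)} nodes received both messages")
```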

    Cataclysm: Scalable overload policing for Internet applications

    In this paper we present the Cataclysm server platform for handling extreme overloads in hosted Internet applications. The primary contribution of our work is a low-overhead, highly scalable admission control technique for Internet applications. Cataclysm provides several desirable features, such as guarantees on response time via accurate size-based admission control, revenue maximization at multiple time scales via preferential admission of important requests and dynamic capacity provisioning, and the ability to remain operational even under extreme overloads. Cataclysm can transparently trade off the accuracy of its decision making against the intensity of the workload, allowing it to handle incoming rates of several tens of thousands of requests per second. We implement a prototype Cataclysm hosting platform on a Linux cluster and demonstrate the benefits of our integrated approach using a variety of workloads.
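    A minimal sketch of size-based, class-aware admission control in this spirit, assuming a simple capacity budget derived from a response-time target; the request classes, capacity model, and names are illustrative, not the Cataclysm implementation:

```python
# Illustrative admission-control sketch: admit the most important requests whose
# estimated service demand still fits within a response-time budget, shed the rest.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Request:
    importance: int          # lower value = more important (drives heap ordering)
    est_service_time: float  # estimated service demand in seconds (size-based control)
    request_id: str = field(compare=False)

class AdmissionController:
    def __init__(self, capacity_per_sec, response_time_target):
        self.capacity = capacity_per_sec    # service seconds deliverable per wall-clock second
        self.target = response_time_target  # response-time guarantee in seconds

    def police(self, incoming):
        """Admit requests in importance order while the capacity budget can meet the target."""
        budget = self.capacity * self.target  # service seconds available within the target
        admitted, dropped = [], []
        heap = list(incoming)
        heapq.heapify(heap)  # most important (smallest tuple) first
        while heap:
            req = heapq.heappop(heap)
            if req.est_service_time <= budget:
                budget -= req.est_service_time
                admitted.append(req)
            else:
                dropped.append(req)  # shed load instead of violating the guarantee
        return admitted, dropped

if __name__ == "__main__":
    ctrl = AdmissionController(capacity_per_sec=4.0, response_time_target=1.0)
    reqs = [Request(0, 0.5, "checkout-1"), Request(2, 1.5, "search-1"),
            Request(1, 0.8, "cart-1"), Request(2, 2.0, "browse-1")]
    admitted, dropped = ctrl.police(reqs)
    print([r.request_id for r in admitted], [r.request_id for r in dropped])
```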